11 research outputs found

    Interactive IIoT-Based 5DOF Robotic Arm for Upper Limb Telerehabilitation

    Significant advancements in contemporary telemedicine applications reinforce the demand for effective and intuitive telerehabilitation tools. Telerehabilitation can reduce the distance, travel burden, and costs separating rehabilitation patients from therapists. This research introduces a novel interactive telerehabilitation system that integrates an Industrial Internet of Things (IIoT) platform with a robotic manipulator, the xArm-5, to deliver rehabilitation therapies to individuals with upper limb dysfunction. With the proposed system, a therapist can provide upper limb rehab exercises remotely through an augmented reality (AR) user interface (UI) developed in Vuforia Studio, which exchanges bidirectional data over the IIoT platform. The proposed system has a stable communication architecture and low teleoperation latency. Experimental results revealed that, with the developed telerehabilitation framework, the xArm-5 could be teleoperated from the AR platform and/or a joystick to provide standard upper limb rehab exercises. In addition, the designed AR-based UI lets a therapist monitor rehab/robot trajectories alongside the AR digital twin of the robot, ensuring that the robot provides passive therapy for shoulder and elbow movements.
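A latency claim like the one above is typically backed by timestamping each command when it is sent and again when its acknowledgement returns. A minimal sketch of that measurement, using a loopback stand-in for the IIoT transport (the `EchoLink` class and payload fields are hypothetical, not from the paper):

```python
import time
from statistics import mean

class EchoLink:
    """Loopback stand-in for the IIoT transport; a real deployment would
    publish the command on the platform's pub/sub channel and wait for
    the robot's acknowledgement message."""
    def round_trip(self, payload):
        return payload  # echoed back immediately

def mean_round_trip_ms(link, trials=100):
    """Send `trials` timestamped commands and return the mean
    round-trip time in milliseconds."""
    samples = []
    for seq in range(trials):
        t0 = time.perf_counter()
        link.round_trip({"cmd": "move_joint", "seq": seq})
        samples.append((time.perf_counter() - t0) * 1e3)
    return mean(samples)
```

Swapping `EchoLink` for a real publish/subscribe client would measure the end-to-end teleoperation delay the abstract refers to.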

    A Novel Framework for Mixed Reality–Based Control of Collaborative Robot: Development Study

    Background: Applications of robotics in daily life are becoming essential, creating new possibilities in different fields, especially in collaborative environments. The potential of collaborative robots is tremendous, as they can share a workspace with humans. A framework employing state-of-the-art technology for collaborative robots will be worthwhile for further research. Objective: This study presents the development of a novel framework for controlling a collaborative robot using mixed reality. Methods: The framework uses Unity and Unity Hub as the cross-platform game engine and project management tool to design the mixed reality interface and digital twin. It also uses the Windows Mixed Reality platform to render digital material on a holographic display and Azure mixed reality services to capture and expose digital information. Finally, it uses a holographic device (HoloLens 2) to run the mixed reality–based collaborative system. Results: A thorough experiment was conducted to validate the novel framework for mixed reality–based control of a collaborative robot. The framework was successfully applied to implement a collaborative system with a 5–degree-of-freedom robot (xArm-5) in a mixed reality environment. The framework was stable and worked smoothly throughout the collaborative session. Owing to the distributed nature of cloud applications, the latency between issuing a command and its execution on the physical collaborative robot is negligible. Conclusions: Opportunities for collaborative robots in telerehabilitation and teleoperation are as vital as in any other field. The proposed framework was successfully applied in a collaborative session and can also be applied to similar applications for robust and more promising performance.

    Control of a Wheelchair-Mounted 6DOF Assistive Robot With Chin and Finger Joysticks

    Throughout the last decade, many assistive robots for people with disabilities have been developed; however, researchers have not fully leveraged these robotic technologies to create truly independent living conditions for people with disabilities, particularly with respect to activities of daily living (ADLs). An assistive system can help satisfy the demands of regular ADLs for people with disabilities. With an increasing shortage of caregivers and a growing number of individuals with impairments and elderly people, assistive robots can help meet future healthcare demands. A critical aspect of designing these assistive devices is to improve functional independence while providing an excellent human–machine interface. People with limited upper limb function due to stroke, spinal cord injury, cerebral palsy, amyotrophic lateral sclerosis, and other conditions find the controls of assistive devices such as power wheelchairs difficult to use. Thus, the objective of this research was to design a multimodal control method for robotic self-assistance that could help individuals with disabilities perform self-care tasks daily. In this research, a control framework with two interchangeable operating modes, a finger joystick and a chin joystick, is developed in which the joysticks seamlessly control both a wheelchair and a wheelchair-mounted robotic arm. Custom circuitry was developed to complete the control architecture. A user study was conducted to test the robotic system: ten healthy individuals agreed to perform three tasks with each joystick (chin and finger), for a total of six tasks with 10 repetitions each. The control method was tested rigorously, maneuvering the robot at different velocities and under varying payload (1–3.5 lb) conditions. The absolute position accuracy was experimentally found to be approximately 5 mm, and the observed round-trip delay between commands while controlling the xArm was 4 ms. The tests showed that the proposed control system allowed individuals to perform ADLs such as picking up and placing items, with a completion time under 1 min and a 100% success rate for each task.
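The two interchangeable operating modes described above can be pictured as a small state machine: one joystick stream drives either the wheelchair base or the mounted arm, and a dedicated input toggles between them. A minimal sketch under that assumption (class names, gains, and the pause-on-switch behavior are illustrative, not the paper's implementation):

```python
from dataclasses import dataclass

@dataclass
class JoystickSample:
    x: float              # lateral axis deflection, -1..1
    y: float              # forward axis deflection, -1..1
    toggle: bool = False  # mode-switch input pressed this cycle

class MultiModalController:
    """Route one joystick to either the wheelchair base or the arm."""
    def __init__(self, wheel_gain=1.0, arm_gain=0.1):
        self.mode = "wheelchair"
        self.wheel_gain = wheel_gain  # base speed per unit deflection
        self.arm_gain = arm_gain      # end-effector speed per unit deflection

    def step(self, js):
        """Map one joystick sample to a (mode, velocity command) pair."""
        if js.toggle:
            self.mode = "arm" if self.mode == "wheelchair" else "wheelchair"
            return self.mode, (0.0, 0.0)  # stop motion while switching modes
        gain = self.wheel_gain if self.mode == "wheelchair" else self.arm_gain
        return self.mode, (gain * js.x, gain * js.y)
```

Stopping motion on a mode switch is one plausible safety choice; either physical joystick (chin or finger) could feed `JoystickSample` without changing the controller.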

    Hand Rehabilitation Devices: A Comprehensive Systematic Review

    A cerebrovascular accident, or stroke, can cause significant neurological damage, leaving the patient with loss of motor function in the hands. Standard rehabilitation therapy for the hand increases demands on clinics, creating an avenue for powered hand rehabilitation devices. Hand rehabilitation devices (HRDs) are designed to provide the hand with passive, active, and active-assisted rehabilitation therapy; however, HRDs have no standards for development or design. Although categorizing an injury’s severity can guide a patient toward proper assistance, rehabilitation devices lack a set standard for providing a solution from the beginning to the end stages of recovery. In this paper, HRDs are defined and compared by their mechanical designs, actuation mechanisms, control systems, and therapeutic strategies. Furthermore, devices with completed clinical trials are used to project the future development of HRDs. After evaluating the abilities of 35 devices, it is inferred that standard characteristics for HRDs should include an exoskeleton design, the incorporation of challenge-based and coaching therapeutic strategies, and the implementation of surface electromyography (sEMG)-based control.

    Gamification of Upper Limb Rehabilitation in Mixed-Reality Environment

    The advancements in mixed reality (MR) technology in recent years provide excellent prospects for creating novel approaches that supplement conventional physiotherapy to maintain a sufficient quantity and quality of rehabilitation. Using MR systems to facilitate patients’ participation in intensive, repetitive, and task-oriented practice with cutting-edge technologies to enhance functionality and facilitate recovery is very encouraging. Multiple studies have found that patients who undergo MR-based therapy experience significant improvements in upper limb function; however, assessing the efficacy of MR is challenging due to the wide variety of methods and tools used. Because of these challenges, a novel approach, a gamified MR-based solution for upper extremity rehabilitation, is proposed: an MR application for the Microsoft HoloLens 2, complete with game levels, that can measure the ranges of motion of the arm joints. The proposed rehabilitation system’s functionality and usability were evaluated with ten healthy adult participants with no prior arm-related injuries and two occupational therapists (OTs). The system successfully provided rehab exercises for upper limb injuries through interactive mixed-reality games and can mimic upper limb behavior without additional sensors during rehab sessions. Unlike previously researched technology-based rehabilitation methods, this method integrates arm-joint data within the application, with each joint tracked independently. The results and comparisons show that this system is relevant, accurate, and superior to previous VR-based rehabilitation methods: a VR-based system is blind to the surroundings, whereas the proposed approach has spatial awareness of the environment.
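Measuring a joint's range of motion from tracked positions, as the application above does, reduces to computing the angle at each joint from three tracked points and keeping the session's extremes. A minimal sketch of that computation (the point layout and class names are illustrative, not taken from the paper):

```python
import numpy as np

def joint_angle_deg(a, b, c):
    """Angle at point b (degrees) between segments b->a and b->c,
    e.g. shoulder-elbow-wrist positions for elbow flexion."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

class RangeOfMotion:
    """Track the minimum and maximum of a joint angle over a session."""
    def __init__(self):
        self.lo, self.hi = float("inf"), float("-inf")

    def update(self, angle):
        self.lo = min(self.lo, angle)
        self.hi = max(self.hi, angle)
        return self

    @property
    def span(self):
        return self.hi - self.lo
```

Feeding each frame's tracked joint positions through `joint_angle_deg` and `RangeOfMotion.update` yields a per-joint range for the session without any wearable sensors.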

    A Comprehensive Review of Vision-Based Robotic Applications: Current State, Components, Approaches, Barriers, and Potential Solutions

    As an emerging technology, robotic manipulation has seen tremendous advancements driven by technological developments ranging from sensors to artificial intelligence. Over the decades, robotic manipulation has advanced in the versatility and flexibility of mobile robot platforms, so robots are now capable of interacting with the world around them. To interact with the real world, robots require various sensory inputs from their surroundings, and the use of vision is rapidly increasing, as vision is unquestionably a rich source of information for a robotic system. In recent years, robotic manipulators have made significant progress toward achieving human-like abilities, yet a large gap remains between human and robot dexterity, especially in executing complex and long-lasting manipulations. This paper comprehensively investigates the state of the art in vision-based robotic applications, including the current state, components, and approaches used, along with the algorithms for the control and application of robots. Furthermore, a comprehensive analysis of these vision-based algorithms, their effectiveness, and their complexity is presented. The paper concludes with a discussion of the constraints encountered in this research and potential solutions for developing robust and accurate vision-based robot manipulation.

    Current Designs of Robotic Arm Grippers: A Comprehensive Systematic Review

    Recent technological advances enable gripper-equipped robots to perform many tasks traditionally associated with the human hand, allowing the use of grippers in a wide range of applications. Depending on the application, an ideal gripper design should be affordable, energy-efficient, and adaptable to many situations. However, regardless of the number of grippers available on the market, there are still many tasks that are difficult for grippers to perform, which indicates the demand, and the room, for new designs to compete with the human hand. Thus, this paper provides a comprehensive review of robotic arm grippers to identify the benefits and drawbacks of various gripper designs. The review compares gripper designs by considering the actuation mechanism, degrees of freedom, grasping capabilities with multiple objects, and applications, and concludes which gripper design offers the broadest set of capabilities.

    Optimal Base Placement of a 6-DOFs Robot to Cover Essential Activities of Daily Living

    The number of individuals with upper or lower extremity dysfunction (ULED) has increased considerably in recent decades, resulting in a high economic burden for families and society. Individuals with ULEDs require assistive robots to fulfill their activities of daily living (ADLs). Thus, this research presents an objective function for optimizing the base placement of assistive robots to increase the workspace coverage required to fulfill ADLs. The workspace coverage was determined using the xArm 6 robot and experiments with different ADLs. An object collision algorithm is also implemented to avoid collisions between the robot and the user within the workspace. Moreover, the algorithm detects singularities within the workspace by computing the manipulability index. However, since computing the manipulability index from the eigenvalues of the full Jacobian matrix mixes incongruent units, we divided the Jacobian matrix into two parts, one for the angular and another for the linear velocity. Finally, using the objective function with a genetic algorithm (GA), the optimal base placement for the robot is obtained and validated experimentally.
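The unit issue motivating the Jacobian split above is that the linear rows of the Jacobian are in m/s while the angular rows are in rad/s, so a manipulability measure over the full matrix mixes units. A minimal sketch of the split manipulability measure, plus a toy reach-based coverage objective of the kind a GA could maximize (the reach radius and function names are hypothetical, not from the paper):

```python
import numpy as np

def split_manipulability(J):
    """Yoshikawa-style manipulability computed separately for the linear
    (rows 0-2) and angular (rows 3-5) halves of a 6xN Jacobian, so each
    measure stays in consistent units."""
    Jv, Jw = J[:3, :], J[3:, :]
    wv = np.sqrt(max(np.linalg.det(Jv @ Jv.T), 0.0))
    ww = np.sqrt(max(np.linalg.det(Jw @ Jw.T), 0.0))
    return wv, ww  # each drops to ~0 near a singularity of its half

def coverage_objective(base_xy, task_points, reach=0.7):
    """Toy objective: fraction of ADL task points a robot based at
    base_xy can reach (reach radius in meters is an assumption)."""
    diffs = np.asarray(task_points, float) - np.asarray(base_xy, float)
    return float(np.mean(np.linalg.norm(diffs, axis=1) <= reach))
```

A GA individual would encode a candidate `base_xy`; its fitness could combine `coverage_objective` with a penalty whenever either manipulability value falls below a threshold at the required poses.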

    A Novel Multi-Modal Teleoperation of a Humanoid Assistive Robot with Real-Time Motion Mimic

    This research presents the development of a teleoperation system for an assistive robot (NAO) using a Kinect V2 sensor, a set of Meta Quest virtual reality glasses, and Nintendo Switch controllers (Joycons), with the Robot Operating System (ROS) framework implementing the communication between devices. Two interchangeable operating modes are proposed. In one, an exclusive controller drives the robot's locomotion for assignments that require long-distance travel. The other teleoperation protocol uses the skeleton-joint readings from the Kinect sensor, the orientation of the Meta Quest, and the button presses and thumbstick movements of the Joycons to control the arm joints and head of the assistive robot, as well as its movement within a limited area. The system provides first-person image feedback to the operator in the VR glasses and captures the user's voice to be spoken by the assistive robot. The results are promising, and the system can be used for educational and therapeutic purposes.